[ET-VK] Allocate memory for weight and activation tensors lazily #13474
Conversation
Summary:
* Allocate memory for weight tensors right before the prepacking shader is dispatched, rather than while building the graph
* Move allocation of shared objects (i.e. memory for intermediate tensors) to occur after prepacking
## Motivation
Prevent screen blackout (Llama 3.2 1B) / device crash (Llama 3.2 3B) when running Llama 3.2 models on a Samsung Galaxy S24. This behaviour is related to high peak memory usage while loading the model.
## Full Context
During model loading, the Vulkan delegate needs to store 3 copies of constant data in memory at various points:
* source data obtained from loading the model
* staging buffer
* GPU texture/buffer
The general rationale of this change is to allocate memory for each copy only when necessary to minimize the "overlap" when all 3 exist at once.
### Current order of operations
Legend:
* `W` represents total weight nbytes
* `w` represents weight nbytes for one tensor
* `A` represents total activations nbytes
* `M` represents an approximation of the total memory footprint
First, the model file is loaded.
Then, while building the compute graph, for each weight tensor:
1. Weight data is loaded from NamedDataMap (`M = W`)
2. GPU texture/buffer for weight is initialized + memory allocated (`M = 2W`)
3. After building the graph, `graph->prepare()` is called, which currently allocates memory for the activation tensors as well (`M = 2W + A`)
Then, during the prepacking stage, each weight tensor is copied individually:
1. Staging buffer initialized (`M = 2W + A + w`)
2. Copy CPU weight data to staging + CPU weight data is freed (`M = 2W + A`)
3. Compute shader dispatch to copy staging to GPU texture/buffer + free staging buffer (`M = 2W + A - w`)
The peak usage in mainline is therefore `M = 2W + A + w`.
### Revised order of operations
This change revises the order of operations:
1. Weight data is loaded from NamedDataMap (`M = W`)
2. GPU texture/buffer for weight is initialized, but **memory is not allocated** (`M = W`)
Then, during the prepacking stage, each weight tensor is copied individually:
1. Staging buffer initialized (`M = W + w`)
2. **Memory allocated for GPU texture/buffer** (`M = W + 2w`)
3. Copy CPU weight data to staging + CPU weight data is freed (`M = W + w`)
4. Compute shader dispatch to copy staging to GPU texture/buffer + free staging buffer (`M = W`)
**Only after all prepacking operations complete is activation memory allocated** (`M = W + A`).
Under this scheme, peak memory is reduced to `M = W + A` (or `M = W + 2w` if `2w > A`), which is at or very close to the theoretical minimum.
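As a self-contained sanity check of the arithmetic above, the small standalone program below walks the memory counter `M` through both orderings and reports the resulting peaks. The per-tensor weight sizes and activation size are made-up round numbers for illustration, not measurements from Llama 3.2.

```cpp
// Models the eager (mainline) and lazy (this PR) orderings described above and
// reports the peak value of M. All sizes are hypothetical, in MB.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int64_t peak_memory(const std::vector<int64_t>& weights, int64_t A, bool lazy) {
  int64_t M = 0, peak = 0;
  auto track = [&](int64_t delta) { M += delta; peak = std::max(peak, M); };

  // Graph building: CPU weight data is loaded for every tensor.
  for (int64_t w : weights) {
    track(w);             // weight loaded from NamedDataMap
    if (!lazy) track(w);  // mainline: GPU memory allocated eagerly -> M = 2W
  }
  if (!lazy) track(A);    // mainline: graph->prepare() allocates activations

  // Prepacking: each weight tensor is staged and copied individually.
  for (int64_t w : weights) {
    track(w);             // staging buffer initialized
    if (lazy) track(w);   // revised: GPU memory allocated just before dispatch
    track(-w);            // CPU copy freed after copying to staging
    track(-w);            // staging buffer freed after the shader dispatch
  }
  if (lazy) track(A);     // revised: activations allocated after prepacking
  return peak;
}

int main() {
  const std::vector<int64_t> weights = {400, 300, 200, 100};  // W = 1000 MB
  const int64_t A = 250;                                      // activations, MB
  std::printf("mainline peak: %lld MB\n",
              static_cast<long long>(peak_memory(weights, A, false)));
  std::printf("lazy peak:     %lld MB\n",
              static_cast<long long>(peak_memory(weights, A, true)));
}
```

With these placeholder sizes the program prints a mainline peak of `2W + A + w` = 2650 MB versus a lazy peak of `W + 2w` = 1800 MB (since `2w > A` here), matching the formulas above.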
Test Plan:
## Logging Memory Usage
Using
```cpp
// VmRSS is read from /proc/self/status; the VMA block total comes from the
// Vulkan Memory Allocator statistics exposed by the ET-VK runtime.
#include <cstdint>
#include <fstream>
#include <string>

// Resident set size of the process, in KB.
uint64_t getVmRssInKB() {
  std::ifstream statusFile("/proc/self/status");
  std::string line, num;
  while (std::getline(statusFile, line)) {
    if (line.substr(0, 5) == "VmRSS") {
      num = line.substr(line.find_first_of("0123456789"));
      break;
    }
  }
  return std::stoull(num);
}

// Total bytes reserved in VMA memory blocks, converted to KB.
uint64_t getVmaStatsInKB() {
  auto stats =
      vkcompute::api::context()->adapter_ptr()->vma().get_memory_statistics();
  return stats.total.statistics.blockBytes >> 10;
}
```
to log the memory footprint at various points of inference while running the llama_runner binary with Llama 3.2 1B, we can compare memory usage with and without these changes.
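For reference, the checkpoints in the logs below can be produced with a thin wrapper over the two helpers above; the wrapper and its call sites here are illustrative, not the exact instrumentation used in llama_runner.

```cpp
// Illustrative wrapper over getVmRssInKB()/getVmaStatsInKB(); assumes the
// ET-VK headers providing vkcompute::api::context() are already included.
#include <cstdio>

void logMemoryUsage(const char* stage) {
  std::printf(
      "Memory usage %s: %llu KB (VmRSS), %llu KB (VMA)\n",
      stage,
      static_cast<unsigned long long>(getVmRssInKB()),
      static_cast<unsigned long long>(getVmaStatsInKB()));
}

// Example call sites: logMemoryUsage("after graph building");
//                     logMemoryUsage("after prepack operations");
```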
With changes: P1908051860 (Meta only)
```
Memory usage before model compilation: 1115760 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924832 KB (VmRSS), 17920 KB (VMA)
Memory usage after graph preparation: 1935312 KB (VmRSS), 17920 KB (VMA)
Memory usage prepack start: 1935312 KB, VMA Block: 17920 KB
Memory usage after prepack operations: 1372376 KB (VmRSS), 2330528 KB (VMA)
Memory usage before execute: 1372804 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1376916 KB (VmRSS), 2330528 KB (VMA)
```
Without changes: P1908054759 (Meta only)
```
Memory usage before model compilation: 1114784 KB (VmRSS), 0 KB (VMA)
Memory usage after graph building: 1924432 KB (VmRSS), 962464 KB (VMA)
Memory usage after graph preparation: 1922916 KB (VmRSS), 2326432 KB (VMA)
Memory usage prepack start: 1922916 KB, VMA Block: 2326432 KB
Memory usage after prepack operations: 1359180 KB (VmRSS), 2330528 KB (VMA)
Memory usage before execute: 1359492 KB (VmRSS), 2330528 KB (VMA)
Memory usage at end of execute: 1363636 KB (VmRSS), 2330528 KB (VMA)
```
These logs show how the changes reduce peak memory: the VMA footprint grows gradually while the model loads, as VmRSS gradually decreases. Without the changes, the VMA footprint already reaches its peak right after the graph is built and prepared.
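Treating the two counters as roughly additive, the "after graph preparation" checkpoint makes this concrete: with the changes, about 1935312 + 17920 ≈ 1.95 million KB is in flight at that point, versus about 1922916 + 2326432 ≈ 4.25 million KB without them, because in mainline all GPU weight and activation memory has already been allocated before prepacking begins.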
Visually, it can also be verified that the Samsung Galaxy S24's screen no longer blacks out while the model loads.
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13474
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit 961f9a3 with merge base 8ef9595: one new job failure, plus one broken-trunk job that was already failing on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: b2d943d
Pull Request resolved: #13474
@SS-JIA has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…lazily"
Differential Revision: [D80460033](https://our.internmc.facebook.com/intern/diff/D80460033)
Pull Request resolved: #13474
ghstack-source-id: 303779654
This pull request was exported from Phabricator. Differential Revision: D80460033
…lazily"
Differential Revision: [D80460033](https://our.internmc.facebook.com/intern/diff/D80460033)
Pull Request resolved: #13474
ghstack-source-id: 303830115
This pull request was exported from Phabricator. Differential Revision: D80460033
…lazily"
Differential Revision: [D80460033](https://our.internmc.facebook.com/intern/diff/D80460033)
Pull Request resolved: #13474
ghstack-source-id: 303862303
This pull request was exported from Phabricator. Differential Revision: D80460033
Merged ac46761 into gh/SS-JIA/292/base
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #13512

Summary: It seems #13474 was not merged correctly via the cherry pick bot. This PR manually syncs internal and fbcode.
Stack from ghstack (oldest at bottom):
* `prepare_pipelines()` to `prepare()` #13478